
American Sign Language




How Pragmatics Shape Articulation: A Computational Case Study in STEM ASL Discourse

Imai, Saki, Kezar, Lee, Aichler, Laurel, Inan, Mert, Walker, Erin, Wooten, Alicia, Quandt, Lorna, Alikhani, Malihe

arXiv.org Artificial Intelligence

Most state-of-the-art sign language models are trained on interpreter or isolated vocabulary data, which overlooks the variability that characterizes natural dialogue. However, human communication dynamically adapts to contexts and interlocutors through spatiotemporal changes and articulation style. This variability is especially pronounced in educational settings, where teachers and students use novel vocabulary. To address this gap, we collect a motion capture dataset of American Sign Language (ASL) STEM (Science, Technology, Engineering, and Mathematics) dialogue that enables quantitative comparison between dyadic interactive signing, solo signed lecture, and interpreted articles. Using continuous kinematic features, we disentangle dialogue-specific entrainment from individual effort reduction and show spatiotemporal changes across repeated mentions of STEM terms. On average, dialogue signs are 24.6%-44.6% shorter in duration than isolated signs, and show significant reductions absent in monologue contexts. Finally, we evaluate sign embedding models on their ability to recognize STEM signs and approximate how entrained the participants become over time. Our study bridges linguistic analysis and computational modeling to understand how pragmatics shape sign articulation and its representation in sign language technologies.




PopSign ASL v1.0: An Isolated American Sign Language Dataset Collected via Smartphones

Neural Information Processing Systems

PopSign is a smartphone-based bubble-shooter game that helps hearing parents of deaf infants learn sign language. To help parents practice their ability to sign, PopSign is integrating sign language recognition as part of its gameplay.


Challenges and opportunities in portraying emotion in generated sign language

McDonald, John C., Wolfe, Rosalee, Nunnari, Fabrizio

arXiv.org Artificial Intelligence

Non-manual signals in sign languages continue to be a challenge for signing avatars. More specifically, emotional content has been difficult to incorporate because of a lack of a standard method of specifying the avatar's emotional state. This paper explores the application of an intuitive two-parameter representation for emotive non-manual signals to the Paula signing avatar that shows promise for facilitating the linguistic specification of emotional facial expressions in a more coherent manner than previous methods. Users can apply these parameters to control Paula's emotional expressions through a textual representation called the EASIER notation. The representation can allow avatars to express more nuanced emotional states using two numerical parameters. It also has the potential to enable more consistent specification of emotional non-manual signals in linguistic annotations which drive signing avatars.


AI ring tracks spelled words in American Sign Language

AIHub

A Cornell-led research team has developed an artificial intelligence-powered ring equipped with micro-sonar technology that can continuously and in real time track fingerspelling in American Sign Language (ASL). In its current form, SpellRing could be used to enter text into computers or smartphones via fingerspelling, which is used in ASL to spell out words without corresponding signs, such as proper nouns, names and technical terms. With further development, the device could potentially be used to continuously track entire signed words and sentences. "Many other technologies that recognize fingerspelling in ASL have not been adopted by the deaf and hard-of-hearing community because the hardware is bulky and impractical," said Hyunchul Lim, a doctoral student in the field of information science. "We sought to develop a single ring to capture all of the subtle and complex finger movement in ASL." Lim is lead author of "SpellRing: Recognizing Continuous Fingerspelling in American Sign Language using a Ring," which will be presented at the Association for Computing Machinery's conference on Human Factors in Computing Systems (CHI), April 26-May 1 in Yokohama, Japan.


Enhanced Sign Language Translation between American Sign Language (ASL) and Indian Sign Language (ISL) Using LLMs

Kumar, Malay, Visagan, S. Sarvajit, Mahajan, Tanish Sarang, Natarajan, Anisha

arXiv.org Artificial Intelligence

We present research that aims to bridge users of American Sign Language (ASL) with users of spoken language and Indian Sign Language (ISL). The research enabled us to create a novel framework for learner systems that leverages large language models (LLMs) to provide key features, including efficient real-time translation between the two sign languages and seamless LLM-driven translation into ISL. The full implementation is presented in this paper. The core of the system is a pipeline that begins with classification and recognition of ASL gestures using a strong Random Forest classifier. The recognized ASL is translated into text, which can be more easily processed. Natural language processing (NLP) techniques play a central role in our LLM integration, where LLMs convert the ASL text into ISL while preserving the intent of the sentence or phrase. The final step synthesizes the translated text back into ISL gestures using RIFE-Net, creating an end-to-end translation experience. This framework addresses key challenges such as handling gesture variability and overcoming the linguistic differences between ASL and ISL. By automating the translation process, we hope to vastly improve accessibility for sign language users, so that the communication gap between ASL and ISL no longer creates barriers and these communities are brought closer together. We believe the same principles can be applied across a wide variety of sign language dialects.
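The staged pipeline the abstract describes (gesture recognition → text → LLM translation → synthesis) can be sketched as follows. This is an illustrative assumption, not the authors' implementation: the feature vectors, gloss labels, and the `translate_to_isl` stub (standing in for the LLM step) are all hypothetical.

```python
# Hypothetical sketch of the ASL -> ISL pipeline stages described above.
# Toy features, gloss labels, and the translate_to_isl stub are assumptions,
# not the authors' actual data or models.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Stage 1: recognize ASL gestures from (toy) landmark feature vectors.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(40, 8))        # 40 toy gesture samples, 8 features each
y_train = ["HELLO", "THANK_YOU"] * 20     # toy ASL gloss labels
clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X_train, y_train)

def recognize(features):
    """Stage 2: recognized gestures become ASL text (glosses)."""
    return clf.predict(features)

def translate_to_isl(asl_gloss):
    """Stage 3: stand-in for the LLM that maps ASL text to ISL glosses;
    a real system would call a language model here."""
    lookup = {"HELLO": "NAMASTE", "THANK_YOU": "DHANYAVAD"}  # illustrative only
    return lookup.get(asl_gloss, asl_gloss)

# Stage 4 in the paper would render these glosses as gestures via RIFE-Net;
# here we stop at the translated gloss sequence.
glosses = recognize(X_train[:2])
isl = [translate_to_isl(g) for g in glosses]
print(isl)
```

The design point of interest is the intermediate text representation: by converting recognized gestures to glosses first, the gesture-recognition and translation stages can be developed and swapped independently.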


American Sign Language to Text Translation using Transformer and Seq2Seq with LSTM

Putra, Gregorius Guntur Sunardi, D'Layla, Adifa Widyadhani Chanda, Wahono, Dimas, Sarno, Riyanarto, Haryono, Agus Tri

arXiv.org Artificial Intelligence

Sign language translation is an important issue in communication between deaf and hearing people, as sign language expresses words through hand, body, and mouth movements. American Sign Language (ASL) is one such sign language, and it includes alphabetic (fingerspelled) signs. The development of neural machine translation technology is moving towards sign language translation, and the Transformer has become the state of the art in natural language processing. This study compares the Transformer with the Sequence-to-Sequence (Seq2Seq) model in translating sign language to text. In addition, an experiment was conducted by adding a Residual Long Short-Term Memory (ResidualLSTM) module to the Transformer. Adding ResidualLSTM to the Transformer reduces the Transformer model's performance by 23.37% based on the BLEU Score, while the Transformer itself increases the BLEU Score by 28.14 compared to the Seq2Seq model.
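A residual LSTM block of the kind the study inserts into the Transformer can be sketched as below. The layer sizes, the use of layer normalization, and the placement of the residual connection are assumptions for illustration; the paper's exact architecture may differ.

```python
# Minimal sketch of a residual LSTM block; dimensions and normalization
# are illustrative assumptions, not the paper's exact configuration.
import torch
import torch.nn as nn

class ResidualLSTM(nn.Module):
    def __init__(self, d_model=64):
        super().__init__()
        self.lstm = nn.LSTM(d_model, d_model, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x):
        out, _ = self.lstm(x)        # sequence output, hidden state discarded
        return self.norm(x + out)    # residual connection around the LSTM

x = torch.randn(2, 10, 64)           # (batch, sequence length, features)
block = ResidualLSTM()
y = block(x)
print(y.shape)
```

Because input and output shapes match, such a block can be dropped between existing Transformer layers, which is what makes the reported performance drop notable: the residual path alone does not guarantee the recurrent layer helps.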